59 research outputs found

    A sensitivity analysis of the PAWN sensitivity index

    The PAWN index is gaining traction in the modelling community as a sensitivity measure. However, its robustness to its own design parameters has not yet been scrutinized: the size (N) and sampling (Δ) of the model output, the number of conditioning intervals (n) or the summary statistic (Ξ). Here we fill this gap by running a sensitivity analysis of a PAWN-based sensitivity analysis. We compare the results with the design uncertainties of the Sobol' total-order index (S_Ti). Unlike for S_Ti, the design uncertainties in PAWN create non-negligible chances of producing biased results when ranking or screening inputs. The dependence of PAWN upon (N, n, Δ, Ξ) is difficult to tame, as these parameters interact with one another. Even in an ideal setting in which the optimum choice for (N, n, Δ, Ξ) is known in advance, PAWN might not allow one to distinguish an influential, non-additive model input from a truly non-influential model input.
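    To illustrate what the design parameters above control, here is a minimal sketch of a PAWN-style index: the Kolmogorov-Smirnov distance between the unconditional output distribution and the distributions conditional on each of n intervals of one input, summarized by a statistic (the median below, playing the role of Ξ). This is an illustrative reconstruction, not the authors' code; the function names are ours.

```python
import numpy as np

def ks_distance(sample_a, sample_b):
    # Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs.
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return np.max(np.abs(cdf_a - cdf_b))

def pawn_index(x, y, n_intervals=10, statistic=np.median):
    # KS distance between the unconditional output sample y and the output
    # conditional on x falling in each of n_intervals quantile bins,
    # summarized by `statistic`.
    edges = np.quantile(x, np.linspace(0, 1, n_intervals + 1))
    ks = [ks_distance(y, y[(x >= lo) & (x <= hi)])
          for lo, hi in zip(edges[:-1], edges[1:])]
    return statistic(ks)
```

    The choices of sample size, number of intervals and summary statistic visible in this sketch are exactly the design parameters whose interactions the study probes.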

    Quantitative storytelling in the making of a composite indicator

    The reasons for and against composite indicators are briefly reviewed, as are the available theories for their construction. After noting the strong normative dimension of these measures (which ultimately aim to ‘tell a story’, e.g. to promote the social discovery of a particular phenomenon), we inquire whether a less partisan use of a composite indicator can be proposed by allowing more latitude in the framing of its construction. We thus explore whether a composite indicator can be built to tell ‘more than one story’ and test this in practical contexts. These include measures used in convergence analysis in the field of cohesion policies and a recent case involving the World Bank’s Doing Business Index. Our experiments are built to imagine different constituencies and stakeholders who agree on the use of evidence and of statistical information while differing on the interpretation of what is relevant and vital.
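    A toy example (all scores and weights are hypothetical, invented for illustration) of how a linear composite indicator can ‘tell more than one story’: two stakeholder weightings applied to the same normalized data reverse the ranking of the units.

```python
import numpy as np

# Hypothetical normalized scores of three units (rows) on two dimensions (cols).
scores = np.array([[0.9, 0.2],   # unit 0
                   [0.5, 0.6],   # unit 1
                   [0.1, 0.9]])  # unit 2

def composite(weights):
    # Linear aggregation: weighted average of the normalized indicators.
    w = np.asarray(weights, dtype=float)
    return scores @ (w / w.sum())

# Two "stories": one constituency prioritizes dimension 1, another dimension 2.
rank1 = np.argsort(-composite([0.8, 0.2]))  # ranking under the first weighting
rank2 = np.argsort(-composite([0.2, 0.8]))  # ranking under the second
```

    Under the first weighting unit 0 leads; under the second the order is reversed, even though the underlying statistical information is identical.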

    Silver as a constraint for a large-scale development of solar photovoltaics? Scenario-making to the year 2050 supported by expert engagement and global sensitivity analysis

    In this study we assess whether the availability of silver could constrain a large-scale deployment of solar photovoltaics (PV). While silver-paste use in photovoltaics cell metallization is becoming more efficient, solar photovoltaics power capacity installation is growing at an exponential pace. Besides photovoltaics, silver is also employed in an array of industrial and non-industrial applications. The trends of these uses are examined up to the year 2050. The technical coefficients for the expansion in photovoltaics power capacity and the contraction in silver-paste use have been assessed through an expert-consultation process. The trend of use in the non-PV sectors has been estimated through an ARIMA (auto-regressive integrated moving average) model. The yearly and cumulative silver demand are evaluated against the technological potential for increasing silver mining and the estimates of its global natural availability, respectively. The model implemented is tested with a quasi-random Monte Carlo variance-based global sensitivity analysis. The result of our inquiry is that silver may not represent a constraint for a very-large-scale deployment of photovoltaics (up to tens of TW in installed power capacity), provided the present decreasing trend in the use of silver paste in the photovoltaics sector continues at an adequate pace. Silver use in non-photovoltaic sectors also has a tangible influence on potential constraints. In terms of natural constraints, most of the uncertainty depends on the actual estimates of the global natural silver budget.
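    The interplay described above, exponential capacity growth against declining silver intensity, can be caricatured in a few lines (all starting values and rates below are hypothetical placeholders, not figures from the study): yearly silver demand rises or falls depending on whether paste intensity declines faster than capacity additions grow.

```python
import numpy as np

def yearly_pv_silver_demand(capacity_growth, intensity_decline, years=31):
    # Tonnes of silver per year: capacity added (W/yr) times intensity (g/W).
    # Starting values (100 GW/yr added, 20 mg/W) are hypothetical placeholders.
    t = np.arange(years)
    capacity_added = 100e9 * capacity_growth ** t   # W added per year
    intensity = 20e-3 * intensity_decline ** t      # g of silver per W
    return capacity_added * intensity / 1e6         # tonnes per year

slow = yearly_pv_silver_demand(1.10, 0.95)  # paste use falls slower than growth
fast = yearly_pv_silver_demand(1.10, 0.85)  # paste use falls faster than growth
```

    With 10% annual capacity growth, a 5%/yr intensity decline still lets yearly demand grow, while a 15%/yr decline sends it downward, which is the sense in which an "adequate pace" of decline matters.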

    Is VARS more intuitive and efficient than Sobol' indices?

    The Variogram Analysis of Response Surfaces (VARS) has been proposed by Razavi and Gupta as a new comprehensive framework in sensitivity analysis. According to these authors, VARS provides a more intuitive notion of sensitivity and is much more computationally efficient than Sobol' indices. Here we review these arguments and critically compare the performance of VARS-TO, for total-order index, against the total-order Jansen estimator. We argue that, unlike classic variance-based methods, VARS lacks a clear definition of what an "important" factor is, and show that the alleged computational superiority of VARS does not withstand scrutiny. We conclude that while VARS enriches the spectrum of existing methods for sensitivity analysis, especially for a diagnostic use of mathematical models, it complements rather than substitutes classic estimators used in variance-based sensitivity analysis. Comment: Currently under review in Environmental Modelling & Software.
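    For reference, the Jansen total-order estimator that VARS-TO is benchmarked against can be sketched as follows (an illustrative implementation, assuming independent inputs uniform on the unit hypercube):

```python
import numpy as np

def jansen_total_order(f, d, N=4096, seed=0):
    # Jansen estimator: T_i = E[(f(A) - f(AB_i))^2] / (2 Var(Y)),
    # where AB_i equals matrix A except column i, which is taken from B.
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(N, d))
    B = rng.uniform(size=(N, d))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    T = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        T[i] = np.mean((fA - f(ABi)) ** 2) / (2 * var_y)
    return T
```

    For an additive test function such as y = x1 + 0.1 x2 the estimator recovers T_1 near 1 and T_2 near 0, which is the "importance" notion the variance-based framework makes explicit.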

    Current models underestimate future irrigated areas

    Predictions of global irrigated areas are widely used to guide strategies that aim to secure environmental welfare and manage climate change. Here we show that these predictions, which range between 240 and 450 million hectares (Mha), underestimate the potential extension of irrigation by ignoring basic parametric and model uncertainties. We found that the probability distribution of global irrigated areas in 2050 spans almost half an order of magnitude (∌300–800 Mha, P2.5–P97.5), with the right tail pushing values up to ∌1,800 Mha. This uncertainty is mostly irreducible, as it is largely caused by either population-related parameters or the assumptions behind the model design. Model end-users and policy makers should acknowledge that irrigated areas are likely to grow much more than previously thought, in order to avoid underestimating potential environmental costs.
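    An interval such as the (P2.5, P97.5) range above comes from propagating parameter uncertainty through the model by Monte Carlo. A schematic sketch, with an entirely hypothetical toy model and invented distributions, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical uncertain drivers (distributions invented for illustration).
population_factor = rng.normal(1.0, 0.15, size=n)  # relative population growth
demand_factor = rng.lognormal(0.0, 0.3, size=n)    # per-capita irrigation demand

# Toy model: scale a 300 Mha baseline by the two uncertain factors.
irrigated_area = 300 * population_factor * demand_factor  # Mha in 2050

# Report the central 95% interval of the resulting distribution.
lo, hi = np.quantile(irrigated_area, [0.025, 0.975])
```

    Even this two-parameter toy yields an interval spanning several-fold, illustrating how quickly parametric uncertainty widens a point prediction.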

    Are the results of the groundwater model robust?

    De Graaf et al. (2019) suggest that groundwater pumping will bring 42–79% of worldwide watersheds close to environmental exhaustion by 2050. We are skeptical of these figures due to several non-unique assumptions behind the calculation of irrigation water demands and the perfunctory exploration of the model's uncertainty space. Their sensitivity analysis reveals a widespread lack of elementary concepts of design of experiments among modellers and cannot be taken as proof that their conclusions are robust. Comment: Comment on the paper by De Graaf et al. 2019, Environmental flow limits to global groundwater pumping, Nature 574 (7776), 90–94.
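    One elementary design-of-experiments point behind such critiques: moving one factor at a time (OAT) around a baseline confines the design to the sphere inscribed in the uncertainty hypercube, and the fraction of the space that sphere occupies collapses with the number of factors. A short computation of that fraction:

```python
from math import gamma, pi

def inscribed_sphere_fraction(d):
    # Volume of the d-ball of radius 1/2 (the region reachable by OAT steps
    # around the centre of the unit hypercube) divided by the cube's volume:
    # pi^(d/2) / (2^d * Gamma(d/2 + 1)).
    return pi ** (d / 2) / (2 ** d * gamma(d / 2 + 1))
```

    In two dimensions the fraction is π/4 ≈ 0.79, but by ten dimensions it is already below 0.3%, so an OAT exploration leaves essentially all of the uncertainty space unvisited.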

    Is it possible to improve existing sample-based algorithm to compute the total sensitivity index?

    Variance-based sensitivity indices have established themselves as a reference among practitioners of sensitivity analysis of model output. It is not unusual to consider a variance-based sensitivity analysis informative if it produces at least the first-order sensitivity indices S_j and the so-called total-effect sensitivity indices T_j for all the uncertain factors of the mathematical model under analysis. Computational economy is critical in sensitivity analysis. It depends mostly upon the number of model evaluations needed to obtain stable values of the estimates. While efficient estimation procedures independent of the number of factors under analysis are available for the first-order indices, this is less the case for the total sensitivity indices. When estimating T_j, one can either use a sample-based approach, whose computational cost depends on the number of factors, or approaches based on meta-modelling/emulators, e.g. based on Gaussian processes. The present work focuses on sample-based estimation procedures for T_j and tries different avenues to achieve an algorithmic improvement over the designs proposed in the existing best practices. We conclude that some proposed sample-based improvements found in the literature do not work as claimed, and that improving on the existing best practice is indeed fraught with difficulties. We motivate our conclusions by introducing the concepts of explorativity and efficiency of the design.
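    To make the cost dependence concrete: a common sample-based scheme estimates both S_j and T_j from the same A, B, AB_i matrices at a cost of N(d+2) model runs, so the budget grows with the number of factors d. A sketch of the first-order part (illustrative, assuming independent inputs uniform on the unit hypercube):

```python
import numpy as np

def first_order_indices(f, d, N=8192, seed=1):
    # S_i = E[f(B) * (f(AB_i) - f(A))] / Var(Y) (Saltelli et al. 2010 design);
    # total cost is N * (d + 2) model runs, growing with the number of factors.
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(N, d))
    B = rng.uniform(size=(N, d))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # AB_i: matrix A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var_y
    return S
```

    For y = 2 x1 + x2 with uniform inputs the analytical values are S_1 = 0.8 and S_2 = 0.2, which the estimator recovers to within sampling error.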